Search Results: "Matthias Klumpp"

7 September 2015

Matthias Klumpp: Update notifications in Debian Jessie

Piwik told me that people are still sharing my post about the state of GNOME-Software and update notifications in Debian Jessie, so I thought it might be useful to publish a small update on that matter; I hope it clarifies things a little. Have fun using Debian 8!
UPDATE: It appears that many people have problems getting update notifications in GNOME on Jessie. If you are affected by this, please try the following:
  1. Open dconf-editor and navigate to org.gnome.settings-daemon.plugins.updates. Check if the key active is set to true.
  2. If that doesn't help, also check whether the frequency-refresh-cache value under org.gnome.settings-daemon.plugins.updates is set to a sane value (e.g. 86400).
  3. Consider increasing the priority value (if it isn't at 300 already).
  4. If all of that doesn't help: some internal logic in g-s-d is probably preventing a cache refresh (e.g. it thinks it is on an expensive network connection and therefore doesn't refresh the cache automatically, or it thinks it is running on battery). This is a bug. If it still happens on Debian Stretch with GNOME-Software, please report a bug against the gnome-software package. As a workaround for Jessie you can enable unconditional cache refreshing via the APT cronjob by installing apt-config-auto-update. (A quick way to inspect these settings from a terminal is sketched below.)
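If you prefer the command line over dconf-editor, the same keys can be inspected and changed with gsettings. This is only a minimal sketch and assumes the org.gnome.settings-daemon.plugins.updates schema is installed on your Jessie system:
gsettings get org.gnome.settings-daemon.plugins.updates active
gsettings set org.gnome.settings-daemon.plugins.updates active true
gsettings get org.gnome.settings-daemon.plugins.updates frequency-refresh-cache
gsettings set org.gnome.settings-daemon.plugins.updates frequency-refresh-cache 86400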

11 August 2015

Matthias Klumpp: AppStream/DEP-11 for everyone! (beta!)

One of the things we discussed at this year's Akademy conference is making AppStream work on Kubuntu. On Debian-based systems, we use a YAML-based implementation of AppStream, called DEP-11. DEP-11 exists for historical reasons (the DEP-11 YAML format was once a superset of AppStream) and because YAML, unlike XML, is a file format accepted by the Debian FTPMasters team, whom we want to have on board when adding support for AppStream. So I've spent the last few days setting up the DEP-11 generator for Kubuntu, as well as improving it greatly to produce more meaningful error messages and better output. It became a bit slower in the process, but the greatly improved diagnostics data is worth it. For example, maintainers of Debian packages will now get a more verbose explanation of issues found with the metadata in their packages, making them easier to fix for people who haven't been in contact with AppStream yet. At the moment, we generate AppStream metadata for Tanglu, Kubuntu and Debian, but so far only Tanglu makes real use of it, by shipping it in a .deb package. Shipping the data as a package is only a workaround though; for a proper implementation, the data will be downloaded by Apt. To achieve that, the data needs to reach the archive first, which is something I can hopefully discuss and implement with the FTPMasters team of Debian at this year's DebConf. When this is done, the new metadata will automatically become available in tools like GNOME-Software or Muon Discover.

How can I see if there are issues with my package? The dep11-generator tool will return HTML pages showing both the extracted metadata and the issues found with it. You can find the information for the respective distribution here: Each issue tag contains a detailed explanation of what went wrong. Errors generally lead to the metadata being ignored, so it will not be processed. Warnings usually concern things which might reduce the amount of metadata or make it less useful, while Info-type hints contain information on how to improve the metadata or make it more useful.

Can I use the data already? Yes, you can. You just need to place the compressed YAML files in /var/cache/app-info/yaml and the icons in /var/cache/app-info/icons/<suite>-<component>/<size>, for example: /var/cache/app-info/icons/jessie-amd64/64x64 (see the sketch at the end of this post).

I think I found a bug in the generator! In that case, please report the issue against the appstream-dep11 package in Debian, or file an issue on GitHub. The only reason why I announce this feature now is to find remaining generator bugs before officially announcing the feature on debian-devel-announce.

When will this be officially announced? I want to give this feature a little more testing, and ideally have the integration into the archive ready, so people can see how the metadata looks when rendered in GNOME-Software/Discover. I also want to write a bit more documentation to help Debian developers and upstreams improve their metadata. Ideally, I also want to incorporate some feedback at DebConf when announcing full AppStream support in Debian. So take all the stuff you've read above as a little sneak peek.

I will also give a talk at DebConf, titled "AppStream, Limba, XdgApp: Where we are going". The aim of this talk is to give an insight into the new developments happening in the software distribution area, and what the goal of these different projects is (the talk should give an overview of what's in the oven and how it will impact Debian).
So if you are interested, please drop by :-) Maybe setting up a BOF would also be a good idea.
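As a rough illustration of the "Can I use the data already?" step above, here is a hedged sketch of dropping the generated data into place by hand. The file name follows the Components-<arch>.yml.gz naming used elsewhere for DEP-11 data, but the exact names produced by the generator may differ on your system:
sudo mkdir -p /var/cache/app-info/yaml /var/cache/app-info/icons/jessie-amd64/64x64
sudo cp Components-amd64.yml.gz /var/cache/app-info/yaml/
sudo cp -r extracted-icons/64x64/. /var/cache/app-info/icons/jessie-amd64/64x64/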

10 August 2015

Matthias Klumpp: Akademy 2015

I am very late with this, but I still wanted to write a few words about Akademy 2015. First of all: it was an awesome conference! Meeting all the great people involved with KDE and seeing who I am working with (as in: face to face, not via email) was awesome. We had some very interesting discussions on a wide variety of topics, and also enjoyed quite some beer together. Particularly important to me was of course talking to Daniel Vrátil, who is working on XdgApp, and Aleix Pol of Muon Discover fame.
Also meeting the other Kubuntu members was awesome. I hadn't seen some of them for about 3 years, and I also met many cool people for the first time. My talk on AppStream/Limba went well, except that I got slightly confused by the timer showing that I had only 2 minutes left after I had just completed the first half of my talk. It turned out that the timer was wrong. Another really nice aspect was being able to get an insight into areas I am usually not involved with, like visual design. It was really interesting to learn about the great work others are doing and to talk to people about their work. I also managed to scratch an itch in the Systemsettings application, where three categories had shown the same icon. Now Systemsettings looks the way it is supposed to, finally :-) The only thing I could maybe complain about was the weather, which was more Scotland/Wales-like than Spanish, but that didn't stop us at all, not even at the social event outside. So I actually don't complain. We also managed to discuss some new technical stuff, like AppStream for Kubuntu, and plenty of other things that I'll write about in a separate blog post. Generally, I got so many new impressions from this year's Akademy that I could write a ten-page blogpost about it and still have to leave things out.

Kudos to the organizers of this Akademy, you did a fantastic job! I also want to thank the Ubuntu community for funding my flight, and the Kubuntu community for pushing me a little to attend :-). KDE is a great community with people driving the development of Free Software forward. The diversity of people, projects and ideas in KDE is a pleasure, and I am very happy to be part of this community.

23 May 2015

DebConf team: Second Call for Proposals and Approved Talks for DebConf15 (Posted by DebConf Content Team)

DebConf15 will be held in Heidelberg, Germany from the 15th to the 22nd of August, 2015. The clock is ticking and our annual conference is approaching. There are less than three months to go, and the Call for Proposals period closes in only a few weeks. This year, we are encouraging people to submit half-length 20-minute events, to allow attendees to get a broader view of the many things that go on in the project in the limited amount of time that we have. To make sure that your proposal is part of the official DebConf schedule, you should submit it before June 15th. If you have already sent your proposal, please log in to summit and make sure to improve your description and title. This will help us fit the talks into tracks and devise a cohesive schedule. For more details on how to submit a proposal see: http://debconf15.debconf.org/proposals.xhtml.

Approved Talks: We have processed the proposals submitted up to now, and we are proud to announce the first batch of approved talks. Some of them: If you have already submitted your event and haven't heard from us yet, don't panic! We will contact you shortly. We would really like to hear about new ideas, teams and projects related to Debian, so do not hesitate to submit yours. See you in Heidelberg,
DebConf Team

30 March 2015

Matthias Klumpp: Limba Project: Another progress report

And once again, it's time for another Limba blogpost :-) Limba is a solution to install 3rd-party software on Linux without interfering with the distribution's native package manager. It can be useful to try out different software versions, use newer software on a stable OS release, or simply to obtain software which does not yet exist for your distribution. Limba works distribution-independently, so software authors only need to publish their software once for all Linux distributions. I recently released version 0.4, with which all the most important features you would expect from a software manager are complete. This includes installing & removing packages, GPG-signing of packages, package repositories, package updates etc. Using Limba is still a bit rough, but most things work pretty well already. So, it's time for another progress report. Since a FAQ-like list is easier to digest than a long blogpost, I'll go with that format again. So, let's address one important general question first:

How does Limba relate to the GNOME sandboxing approach? (If you don't know about GNOME's sandboxes, take a look at the GNOME Wiki; Alexander Larsson also blogged about it recently.) First of all: there is no rivalry here and no NIH syndrome involved. Limba and GNOME's sandboxes (XdgApp) are different concepts, which both have their place. The main difference between the two projects is the handling of runtimes. A runtime is the set of shared libraries and other shared resources applications use. This includes libraries like GTK+/Qt5/SDL/libpulse etc. XdgApp applications have one big runtime they can use, built with OSTree. This runtime is static and will not change; it will only receive critical security updates. A runtime in XdgApp is provided by a vendor like GNOME as a compilation of multiple individual libraries. Limba, on the other hand, generates runtimes on the target system on-the-fly out of several subcomponents with dependency relations between them. Each component can be updated independently, as long as the dependencies are satisfied. The individual components are intended to be provided by the respective upstream projects. Both projects have their individual upsides and downsides: while the static runtime of XdgApp makes testing simple, it is also harder to extend and more difficult to update. If something you need is not provided by the mega-runtime, you will have to provide it yourself (e.g. we will have some applications ship smaller shared libraries with their binaries, as they are not part of the big runtime). Limba does not have this issue, but with its dynamic runtimes it instead relies on upstreams behaving nicely and not breaking ABIs in security updates, so existing applications continue to work even with newer software components. Obviously, I like the Limba approach more, since it is incredibly flexible and even allows mimicking the behaviour of GNOME's XdgApp by using absolute dependencies on components.

Do you have an example of a Limba-distributed application? Yes! I recently created a set of packages for Neverball. Alexander Larsson also created an XdgApp bundle for it, and due to the low amount of stuff Neverball depends on, it was a perfect test subject. One of the main things I want to achieve with Limba is to integrate it well with continuous integration systems, so you can automatically get a Limba package built for your application and have it tested with the current set of dependencies. Also, building packages should be very easy, and as failsafe as possible.
You can find the current Neverball test in the Limba-Neverball repository on GitHub. All you need (after installing Limba and the build dependencies of all components) is to run the make_all.sh script. Later, I also want to provide helper tools to automatically build the software in a chroot environment, and to allow building against the exact version depended on in the Limba package. Creating a Limba package is trivial: it boils down to writing a simple control file describing the dependencies of the package, and writing an AppStream metadata file. If you feel adventurous, you can also add automatic build instructions as a YAML file, which uses a subset of the Travis build config schema (a hypothetical sketch follows below). This is the Neverball Limba package, built on Tanglu 3, run on Fedora 21: Limba-installed Neverball.

Which kernel do I need to run Limba? The Limba build tools run on any Linux version, but to run applications installed with Limba, you need at least Linux 3.18 (for Limba 0.4.2). I plan to bump the minimum version requirement to Linux 4.0+ very soon, since this release contains some improvements in OverlayFS and a few other kernel features I am thinking about making use of. Linux 3.18 is included in most Linux distributions released in 2015 (and of course any rolling-release distribution and Fedora have it).

Building all these little Limba packages and keeping them up-to-date is annoying. Yes indeed. I expect that we will see some bigger Limba packages bundling a few dependencies, but in general this is a pretty annoying property of Limba currently, since there are so few packages available that you can reuse. But I plan to address this. Behind the scenes, I am working on a webservice which will allow developers to upload Limba packages. This central resource can then be used by other developers to obtain dependencies. We can also perform some QA on the received packages, match the available software against CVE databases to see if a component is vulnerable and publish that information, etc. All of this is currently planned, and I can't say a lot more yet. Stay tuned! (As always: if you want to help, please contact me.)

Are the Limba interfaces stable? Can I use it already? The Limba package format should be stable by now. Since Limba is still alpha software, I will however make breaking changes in case there is a huge flaw which makes it reasonable to break the IPK package format. I don't think that this will happen though, as the Limba packages are designed to be easily backward- and forward-compatible. For the Limba repository format, I might make some more changes (less invasive, but you might need to rebuild the repository). tl;dr: Yes! Please use Limba and report bugs, but keep in mind that Limba is still in an early stage of development, and we need bug reports!

Will there be integration into GNOME-Software and Muon? From the GNOME-Software side, there were positive signals about that, but some technical obstacles need to be resolved first. I did not yet get in contact with the Muon crew; they are just implementing AppStream, which is a prerequisite for having any support for Limba[1]. Since PackageKit dropped the support for plugins, every software manager needs to implement support for Limba itself.
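To make the build-instructions feature mentioned above a little more concrete, here is a hedged sketch of what such a YAML file could look like. The keys are illustrative assumptions modelled on the Travis-style subset mentioned in the post, not a documented Limba schema, and the build commands assume an autotools project installed into the /opt/bundle prefix:
before_script:
  - ./autogen.sh --prefix=/opt/bundle
script:
  - make
  - make install DESTDIR=$(pwd)/pkginstall/inst_target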
So, thanks for reading this (again too long) blogpost :) There are some more exciting things coming soon, especially regarding AppStream on Debian/Ubuntu! [1]: And I should actually help with the AppStream support, but currently I cannot allocate enough time to take on that additional project as well; this might change in a few weeks. Also, Muon is doing pretty well already!

27 January 2015

Matthias Klumpp: AppStream 0.8 released!

Yesterday I released version 0.8 of AppStream, the cross-distribution standard for software metadata that is currently used by GNOME-Software, Muon and Apper to display rich metadata about applications and other software components.

What's new? The new release contains some tweaks to AppStream's documentation, and extends the specification with a few more tags and refinements. For example, we now recommend sizes for screenshots. The recommended sizes are the ones GNOME-Software already uses today, and it is a good idea to ship those to make software centers look great, as other software centers are planning to use them as well. Normal sizes as well as sizes for HiDPI displays are defined. This change affects only the distribution-generated data; the upstream metadata is unaffected (the distro-specific metadata generator will resize the screenshots anyway). Another addition to the spec is the introduction of an optional <source_pkgname/> tag, which holds the name of the source package that the packages defined in <pkgname/> tags are built from. This is mainly for internal use by the distributor, e.g. it can decide to use this information to link to internal resources (like bugtrackers, package-watch etc.). It may also be used by software-center applications as additional information to group software components. Furthermore, we introduced a <bundle/> tag for future use with 3rd-party application installation solutions. The tag notifies a software installer about the presence of a 3rd-party application bundle, and provides the necessary information on how to install it. In order to do that, the software center needs to support the respective installation solution. Currently, the Limba project and Xdg-App bundles are supported. For software managers, it is a good idea to implement support for 3rd-party app installers as soon as the solutions are ready; currently, the projects are being worked on heavily. The new tag is already used by Limba, which is the reason why Limba depends on the latest AppStream release.

How do I get it? All AppStream libraries, libappstream, libappstream-qt and libappstream-glib, support the 0.8 specification in their latest versions, so in case you are using one of these, you don't need to do anything. For Debian, the DEP-11 spec is being updated at the moment, and the changes will land in the DEP-11 tools soon.

Improve your metadata! This call goes especially to many KDE projects! Getting good data is partly a task for the distributor, since packaging issues can result in incorrect or broken data, screenshots need to be properly resized, etc. However, flawed upstream data can also prevent software from being shown, since software with broken or missing data will not be incorporated in the distribution's AppStream XML data file. Richard Hughes of Fedora has created a nice overview of software failing to be included. You can see the failed-list here; the data can be filtered by desktop environment etc. For KDE projects, a Comment= field is often missing in their .desktop files (or a <summary/> tag needs to be added to their AppStream upstream XML file). Keep in mind that you are not only helping Fedora by fixing these issues, but also all other distributions consuming the metadata you ship upstream. For Debian, we will have a similar overview soon, since it is also a very helpful tool for finding packaging issues.
If you want to get more information on how to improve your upstream metadata, and on how new metadata should look, take a look at the quickstart guide in the AppStream documentation.
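To illustrate the two new tags described above, here is a hedged fragment of distribution metadata for a fictional component; the ID and package names are invented, and the surrounding structure may differ slightly from what your metadata generator emits:
<component type="desktop">
  <id>org.example.foobar.desktop</id>
  <pkgname>foobar</pkgname>
  <source_pkgname>foobar-src</source_pkgname>
  <bundle type="limba">foobar-1.0.2</bundle>
</component>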

2 December 2014

Matthias Klumpp: How to build a cross-distro package with Limba

Disclaimer: Limba is still in a very early stage of development. Bugs happen, and I give no guarantees on API stability yet. Limba is a very simple cross-distro package installer, utilizing the OverlayFS found in recent Linux kernels (>= 3.18). As an example, I created a small Limba package for one of the Qt5 demo applications, and I would like to share the process of creating Limba packages; it's quite simple, and I could use some feedback on how well the resulting packages work on multiple distributions. I assume that you have compiled Limba and installed it; how that is done is described in its README file. So, let's start.

1. Prepare your application The cool thing about Limba is that you don't really have to make many changes to your application. There are a few things to pay attention to, though: This needs to be done so your application will find its data at runtime. Additionally, you need to write an AppStream metadata file, and find out which stuff your application depends on.

2. Create package metadata & install software

2.1 Basics Now you can create the metadata necessary to build a Limba package. Just run
cd /path/to/my/project
lipkgen make-template
This will create a pkginstall directory containing a control file and a metainfo.xml file, which can be a symlink to the AppStream metadata, or new metadata. Now, configure your application with /opt/bundle as install prefix (-DCMAKE_INSTALL_PREFIX=/opt/bundle, --prefix=/opt/bundle, etc.) and install it to the pkginstall/inst_target directory (a short sketch of this step is shown at the end of this post).

2.2 Handling dependencies If your software has dependencies on other packages, just get the Limba packages for these dependencies, or build new ones. Then place the resulting IPK packages in the pkginstall/repo directory. Ideally, you should be able to fetch Limba packages which contain the software components directly from their upstream developers. Then, open the pkginstall/control file and adjust the Requires line. The names of the components you depend on match their AppStream IDs (the <id/> tag in the AppStream XML document). Any version relation (>=, >>, <<, <=, <>) is supported, specified in brackets after the component-id. The resulting control file might look like this:
Format-Version: 1.0

Requires: Qt5Core (>= 5.3), Qt5DBus (>= 5.3), libpng12
If the specified dependencies are in the repo/ subdirectory, these packages will get installed automatically when your application package is installed. Otherwise, Limba relies on the user to install these packages manually; there is no interaction with the distribution's package manager (yet?).

3. Building the package In order to build your package, make sure the content in inst_target/ is up to date, then run
lipkgen build pkginstall/
This will build your package and output it in the pkginstall/ directory.

4. Testing the package You can now test your package. Just run
sudo lipa install package.ipk
Your software should install successfully. If you provided a .desktop file in $prefix/share/applications, you should find your application in your desktop's application menu. Otherwise, you can run a binary from the command line; just append the version of your package to the binary name (bash-completion helps). Alternatively, you can use the runapp command, which lets you run any binary in your bundle/package. This is quite helpful for debugging, since the environment a Limba-installed application runs in is different from that of other applications. Example:
runapp ${component_id}-${version}:/bin/binary-name
And that's it! :-) I used these steps to create a Limba package for the OpenGL Qt5 demo on Tanglu 2 (Bartholomea), and tested it on Kubuntu 15.04 (Vivid) with KDE, as well as on an up-to-date Fedora 21 with GNOME and without any Qt or KDE stuff installed. I encountered a few obstacles when building the packages, e.g. Qt5 initially didn't find the right QPA plugin; that has been fixed by adjusting a config file in the Qt5Gui package. Also, on Fedora, a matching libpng was missing, so I included that as well. You can currently find the packages at GitHub (but I am planning to move them to a different place soon). The biggest issue with Limba at the moment is that it needs Linux 3.18, or an older kernel with OverlayFS support compiled in. Apart from that and a few bugs, the experience is quite smooth. As soon as I am sure there are no hidden fundamental issues, I can think about implementing more features, like signing packages and automatically updating them. Have fun playing around with Limba!
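As referenced in step 2.1 above, here is a minimal sketch of the configure-and-install step for a CMake-based project. The out-of-source build directory and the DESTDIR-style staging into pkginstall/inst_target are my own assumptions about the expected layout; autotools projects would use --prefix=/opt/bundle instead:
cd /path/to/my/project
mkdir -p build && cd build
cmake .. -DCMAKE_INSTALL_PREFIX=/opt/bundle
make
make install DESTDIR="$(pwd)/../pkginstall/inst_target"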

10 November 2014

Matthias Klumpp: Introducing Limba a software installer experiment

As some of you already know, since the larger restructuring in PackageKit for the 1.0 release, I am rethinking Listaller, the 3rd-party application installer for Linux systems, as well. During the past weeks, I was playing around with a lot of different ideas and code to make installations of 3rd-party software easily possible on Linux, while also working together with the distribution package manager. I have now come up with an experimental project which might achieve this.

Motivation Many of you know Lennart's famous blogpost on how we put together Linux distributions. And he makes a lot of good and valid points there (in fact, I agree with his reasoning). The proposed solution, however, is not something which I am very excited about, at least not for the use-case of installing a simple application[1]. Leaving things like the exclusive dependency on technology like Btrfs aside, the solution outlined by Lennart basically bypasses the distribution itself, instead of working together with it. This results in a duplication of installed libraries, making it harder to get an overview of which versions of which software components are actually running on the system. There is also a risk of security holes due to libraries not being updated. The security issues are worked around by a superior sandbox, which still needs to be implemented (but will definitely come, maybe next year). I wanted to explore a different approach of managing 3rd-party applications on Linux systems, one which allows sharing as much code as possible between applications.

Limba: Glick2 and Listaller concepts merged In order to allow easy creation of software packages, as well as the ability to share software between different 3rd-party applications, I took heavy inspiration from Alexander Larsson's Glick2 project, combining it with ideas from the application-directory-based Listaller. The result is Limba (named after the Limba tree, not the voodoo spirit; I needed some name starting with "li" to keep the prefix used in Listaller, and for a tool like this the name didn't really matter ;-) ). Limba uses OverlayFS to combine an application with its dependencies before running it, as well as mount namespaces and shared subtrees. Except for OverlayFS, which just landed in the kernel recently, all other kernel features needed by Limba have been available for years (and many distributions ship OverlayFS on older kernels as well).

How does it work? In order to achieve separation of software, each software component is located in a separate container (= package). A software component can be an application, like Kate or GEdit, but also a single shared library (openssl) or even a full runtime (KDE Frameworks 5 parts, GNOME 3). Each of these software components can be identified via AppStream metadata, which is just a few bits of XML. A Limba package can declare a dependency on any other software component. In case that software is available in the distribution's repositories, the version found there can be used. Otherwise, another Limba package providing the software is required. Limba packages can be provided from software repositories (e.g. provided by the distributor), or be nested in other packages. For example, imagine the software Kate requires a version of the Qt5 libraries >= 5.2. The downloadable package for Kate can be bundled with that dependency by including the Qt5 5.2 Limba package in the Kate package.
In case another software is installed later which also requires the same version of Qt, the already installed version will be used. Since the software components are located in separate directories under /opt/software, an application will not automatically find its dependencies, or be able to locate its own files. Therefore, each application has to be run by a special tool, which merges the directory trees of the main application and its dependencies together using OverlayFS. This has the nice side effect that the main application can override files from its dependencies, if necessary. The tool also sets up a new mount namespace, so if the application is compiled with a certain prefix, it does not need to be relocatable to find its data files. At installation time, to achieve better system integration, certain files (like e.g. the .desktop file) are split out of the installed directory tree, so the newly installed application achieves almost full system integration.

AQNAY* Can I use Limba now? Limba is an experiment. I like it very much, but it might happen that I find some issues with it and kill it off again. So, if you feel adventurous, you can compile the source code and use the example Foobar application to play around with Limba. Before it can be used in production (if at all), some more time is needed. I will publish documentation on how to test the project soon.

Doesn't OverlayFS have a maximum stacking depth? Oh yes, it has! The "How does it work" explanation doesn't tell the whole truth in that regard (mainly to keep the section small). In fact, Limba will generate a runtime for the newly installed software, which is a directory with links to the actual individual software components the runtime consists of. The runtime is identified by a UUID. This runtime is then mounted together with the respective application using OverlayFS. This works pretty well, and also means no dependency resolution has to be done immediately before an application is started.

That dependency stuff gives me a headache Admittedly, allowing dependencies adds a whole lot of complexity. Other approaches, like the one outlined by Lennart, work around that (and there are good reasons for doing so). In my opinion, the dependency-sharing and de-duplication of software components, as well as the ability to use the components packaged by your Linux distribution, is worth the extra effort.

Can you give an overview of future plans for Limba? Sure, so here is the stuff which currently works: These features are planned for the near future: Remember that Limba is an experiment, still ;-)

XKCD 927 Technically, I am replacing one solution with another one here, so the situation does not change at all ;-). But indeed, some duplicate work is being done, due to more people now working on similar questions in this area. But I think this is a good thing, because the solutions being worked on are fundamentally different approaches, and by exploring multiple ways of doing things, we will come up with something great in the end. (XKCD reference)

Doesn't the use of OverlayFS have an impact on the performance of software running with Limba? I ran some synthetic benchmarks and didn't notice any problems; even the startup speed of Limba applications is only a few milliseconds slower than the startup of the raw native application. However, I will still have to run further tests to give a definitive answer on this.

How do you solve ABI compatibility issues? This approach requires software to keep their ABI stable.
But since software can have strict dependencies on a specific version of another piece of software (although I'd discourage that), even people who are worried about this issue can be happy. We are getting much better at tracking unwanted ABI breaks, and larger projects offer a stable API/ABI during a major release cycle. For smaller dependencies, there are, as explained above, stricter dependencies. In summary, I don't think ABI incompatibilities will be a problem with this approach, at least not more than they have been in general. (The libuild facilities from Listaller to minimize dependencies will still be present in Limba, of course.)

You are wrong because of $X! Please leave a comment in this case! I'd love to discuss new ideas and find the limits of the Limba concept; that's why I am writing C code after all, since what looks great on paper might not work in reality or have issues one hasn't thought about before. So any input is welcome!

Conclusion Last but not least I want to thank Alexander Larsson for writing Glick2, which Limba is heavily inspired by, and for his patient replies to my emails. If Limba turns out to be a good idea, you can expect a few more blog posts about it soon.
* Answered questions nobody asked yet
[1]: Don't get me wrong, I would like to have these ideas implemented; they offer great value. But I think for simple software deployment, the solution is overkill.
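Since every Limba component is identified by a few bits of AppStream XML, as mentioned above, here is a hedged, minimal illustration of such a component description; the ID and names are made up, and Limba may require additional fields:
<component>
  <id>org.example.foobar</id>
  <name>Foobar</name>
  <summary>Example shared component installed as a Limba package</summary>
</component>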

6 November 2014

Matthias Klumpp: The state of AppStream/GNOME-Software in Debian Jessie

or: Why do I not see any update notifications on my brand-new Debian Jessie installation?? This is a short explanation of the status quo, which also explains the "no update notifications" issue in a slightly more detailed way, since I am already getting bug reports about it. As you might know, GNOME provides GNOME-Software for installing applications via PackageKit. In order to work properly, GNOME-Software needs AppStream metadata, which is not yet available in Debian. There was a GSoC student working on the necessary code for that, but the code is not ready and doesn't produce good results yet. Therefore, I postponed AppStream integration to Jessie+1, with an option to include some metadata for GNOME and KDE to use via a normal .deb package. Then GNOME was updated to 3.14. GNOME 3.14 moved lots of stuff into GNOME-Software, including the support for update notifications (which had been in g-s-d before). GNOME-Software is also the only thing which can edit the application groups in GNOME-Shell, at least currently. So obviously, there was now a much stronger motivation to support GNOME-Software in Jessie.

The appstream-glib library, which GNOME-Software uses exclusively to read AppStream metadata, didn't support the DEP-11 metadata format (which Debian uses in place of the original AppStream XML) for a while, but does so in its current development branch. So that component had to be packaged first. Later, GNOME-Software was uploaded to the archive as well, but still lacked the required metadata. That data was provided by me as a .deb package later, locally generated using the current code by my SoC student (the data isn't great, but better than nothing). So much for the good news.

But there are multiple issues at the moment. First of all, the appstream-data package hasn't passed NEW so far, due to its complex copyright situation (nothing we can't resolve, since app-install-data, which appstream-data would replace, is in Debian as well). Also, GNOME-Software is exclusively using offline-updates (more information at [1] and [2]) at the moment. This isn't always working right now, since I haven't had the time to test it properly and I didn't expect it to be used in Debian Jessie[3]. Furthermore, the offline-updates feature requires systemd (which isn't an issue in itself, I am quite fine with that, but people not using it will get unexpected results, unless someone does the work to implement offline-updates with sysvinit). Since we are in freeze at the moment, and this stuff is obviously not ready yet, GNOME is currently without update notifications and without a way to change the shell application groups.

So, how can we fix this? One way would of course be to patch notification support back into g-s-d, if the new layout there allows doing that. But that would not give us the other features GNOME-Software provides, like application-group editing. Implementing that differently and patching it in would be more, or at least the same amount of, work as making GNOME-Software run properly. I therefore prefer getting GNOME-Software to run, at least with basic functionality. That would likely mean hiding things like the offline-update functionality, and using online updates with GNOME-PackageKit instead. Obviously, this approach has its own issues, like doing most of the work post-freeze, which kind of defeats the purpose of the freeze and would need close coordination with the release team. So, this is the status quo at the moment.
It is kind of unfortunate that GNOME moved crucial functionality into a new component, which requires additional integration work by the distributors, so quickly, but that's not something worth dwelling on. We need a way forward to bring update notifications back, and there is currently work going on to do that. For all Debian users: please be patient while we resolve the situation, and sorry for the inconvenience. For all developers: if you would like to help, please contact me or Laurent Bigonville; there are some tasks which could use some help.
As a small remark: if you are using KDE, you are lucky: Apper provides the notification support like it always did, and thanks to improvements in aptcc and PackageKit, it is even a bit faster now. For the Xfce and <other_desktop> people: you need to check if your desktop provides integration with PackageKit for update checking. At least Xfce doesn't, but after GNOME-PackageKit removed support for it (which was moved to gnome-settings-daemon and now to GNOME-Software), nobody has stepped up to implement it yet (so if you want to do it: it's not super-complicated, but knowledge of C and GTK+ is needed).
[3]: It looks like dpkg tries to ask a debconf question for some reason, or an external tool like apt-listchanges is interfering with the process, which must run completely unsupervised. There is some debugging needed to resolve these Debian-specific issues.

10 October 2014

Matthias Klumpp: Listaller + Glick: Some new ideas

As you might know, due to invasive changes in PackageKit, I am currently rewriting the 3rd-party application installer Listaller. Since I am not the only one looking at the 3rd-party app-installation issue (there is a larger effort going on at GNOME, based on Lennart's ideas), it makes sense to redesign some concepts of Listaller. Currently, dependencies and applications are installed into directories in /opt, and Listaller contains some logic to make applications find dependencies, and to talk to the package manager to install missing things. This has some drawbacks, like the need to install an application before using it, the need for applications to be relocatable, and application installations being non-atomic.

Glick2 There is/was another 3rd-party app installer approach on the GNOME side, by Alexander Larsson, called Glick2. Glick uses application bundles (do you remember Klik from back in the day?) mounted via FUSE. This allows some neat features, like atomic installations and software upgrades, no need for relocatable apps and no need to install the application. However, it also has disadvantages. Quoting the introduction document for Glick2: "Bundling isn't perfect, there are some well-known disadvantages. Increased disk footprint is one, although current storage space size makes this not such a big issue. Another problem is with security (or bugfix) updates in bundled libraries. With bundled libraries it's much harder to upgrade a single library, as you need to find and upgrade each app that uses it. Better tooling and upgrader support can lessen the impact of this, but not completely eliminate it." This is what Listaller does better, since it was designed with a large effort to avoid duplication of code. Also, Glick currently doesn't have support for updates and software repositories, which Listaller had.

Combining Listaller and Glick ideas So, why not combine the ideas of Listaller and Glick? In order to have Glick share resources, the system needs to know which shared resources are available. This is not possible if there is one huge Glick bundle containing all of the application's dependencies. So I modularized Glick bundles to contain just one software component, which could be e.g. GTK+ or Qt, GStreamer, or even a larger framework (e.g. "GNOME 3.14 Platform"). These components are identified using AppStream XML metadata, which allows them to be installed from the distributor's software repositories as well, if that is wanted. If you now want to deploy your application, you first create a Glick bundle for it. Then, in a second step, you bundle your application bundle with its dependencies in one larger tarball, which can also be GPG-signed and can contain additional metadata. The resulting metabundle will look like this: This doesn't look like we share resources yet, right? The dependencies are still bundled with the application requiring them. The trick lies in the installation step: while the application above can be executed right away without installing it, there will also be an option to install it. For the user, this will mean that the application shows up in GNOME-Shell's overview or KDE's Plasma launcher, gets properly registered with mimetypes, and is, if installed for all users, available system-wide. Technically, this will mean that the application's main bundle is extracted and moved to a special location on the file system, and so are the dependency bundles.
If bundles already exist, they will not be installed again, and the new application will simply use the existing software. Since the bundles contain information about their dependencies, the system is able to determine which software is needed and which can simply be deleted from the installation directories. If the application is started now, the bundles are combined and mounted, so the application can see the libraries it depends on. Additionally, this concept allows secure updates of applications and shared resources. The bundle metadata contains a URL which points to a bundle repository. If new versions are released, the system's auto-updater can automatically pick these up and install them; this means e.g. the Qt bundle will receive security updates, even if the developer who shipped it with his/her app didn't think of updating it.

Conclusion So far, no productive code exists for this; I just have a proof of concept here. But I pretty much like the idea, and I am thinking about going further in that direction, since it allows deploying applications on the Linux desktop, as well as deploying software on servers, in a way which plays nice with the native package manager and which does not duplicate much code (less risk of having outdated libraries with security flaws around). However, there might be issues I haven't thought about yet. Also, it makes sense to look at GNOME to see how the whole 3rd-party app deployment issue develops. In case I go further with Listaller-NEXT, it is highly likely that it will make use of the ideas sketched above (comments and feedback are more than welcome!).
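As a purely hypothetical illustration of the metabundle idea described above (no productive code exists yet, so the file names and layout are my own invention), such a signed tarball might be laid out roughly like this:
myapp.metabundle/
  metadata.xml          (AppStream description of the application)
  metabundle.sig        (GPG signature over the contents)
  myapp.bundle          (the Glick-style bundle of the application itself)
  deps/
    qt5-5.2.bundle      (dependency bundle, skipped at install time if already present)
    gstreamer-1.4.bundle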

11 September 2014

Matthias Klumpp: Listaller: Back to the future!

It is time for another report on Listaller, the cross-distro 3rd-party package installer, which has now been in development for, depending on how you count, 5-6 years. This will become a longer post, so you might want to grab some coffee or tea ;-)

The original idea The Listaller project was initially started with the goal of making application deployment on Linux distributions as simple as possible, by providing a unified package installation format and tools which make building apps for multiple distributions easier and deployment of updates simple. The key ideas were:

The current state The current release of Listaller handles all of this with a plugin for PackageKit, the cross-distro package-management abstraction layer. It hooks into PackageKit and reads information passing through to the native distributor backend, and if it encounters Listaller software, it handles it appropriately. It can also inject update information. This results in all Listaller software being shown in any PackageKit frontend, and people can work with it just as if the packages were native packages. Listaller package installations are controlled by a machine policy, so the administrator can decide that e.g. only packages from a trusted source (= GPG signature in a trusted database) can be installed. Dependencies can be pulled from the distributor's repositories, or optionally from external sources, like PyPI. This sounds good on paper, but the current implementation has various problems.

The issues The current Listaller approach has some problems. The biggest one lies in the future: soon, there will be no PackageKit plugins anymore! PackageKit 1.0 will remove support for them, because they appear to be a major source of crashes; even the in-tree plugins cause problems. Also, the PackageKit service itself is currently being trimmed of unneeded features and less-used code. These changes in PackageKit are great and needed for the project (and I support these efforts), but they cause a pretty huge problem for Listaller: the project relies on the PackageKit plugin. If used without it, you lose the system-integration part, which is one of the key concepts of Listaller, and a primary goal. But this issue is not the only one. There are more. One huge problem for Listaller is dependency-solving: it needs to know where to get software from in case it isn't installed already. And that has to be done in a cross-distribution way. This is an incredibly complex task, and Listaller contains lots of workarounds for various quirks. It contains so many hacks for distro-specific stuff that it became really hard to understand. The Listaller dependency model also became very complex, because it tried to handle many corner cases. This is bad, of course. But the workarounds weren't added for fun, but because it was assumed to be easier than fixing the root cause, which would have required collaboration between distributors and some changes in the stack, which seemed unlikely to happen at the time the code was written.

The systemd effort Another thing which affects Listaller is the latest push from the systemd team to make cross-distro 3rd-party installations possible. I definitely recommend reading the linked blogpost from Lennart, if you have some spare time! The identified problems are the same as for Listaller, but the solution they propose is completely different, and about three orders of magnitude more invasive than whatever the Listaller project had in mind (I made these numbers up, so don't ask!).
There are also a few issues I see with Lennart's approach; I will probably go into detail about that in another blogpost (e.g. it requires multiple copies of a library lying around, where one version might have a security vulnerability and another one doesn't; it's hard to ensure everything is up to date and secure that way, even if you have a top-notch sandbox). I have great respect for the systemd crew and especially Lennart, and I hope they succeed with their efforts. However, I also think Listaller can achieve similar things with a less invasive solution, at least for 3rd-party app installations (Listaller is one of the partial-fix solutions with strict focus, so not a direct competitor to the holistic systemd approach. Both solutions could happily live together.)

A step into the future Some might have guessed it already: there are some bigger changes coming to Listaller! The most important one is that there will be no Listaller anymore, at least not in its old form. Since the current code relies heavily on the PackageKit plugin, and contains some ugly workarounds, it doesn't make much sense to continue working on it. Instead, I started the Listaller.NEXT project, which is a rewrite of Listaller in C. There are some goals for the rewrite: I made a last release of the 0.5.x series of Listaller, to work with PackageKit 0.9.x; the future lies in the C port. If you are using Listaller (and I know of people who do, for example some deploy statically-linked stuff on internal test setups with it), stay tuned. The packaging format will stay mostly compatible with the current version, so you will not see many changes there (the plan is to freeze it very soon, so no backwards-incompatible changes are made anymore). The 0.5.x series will receive critical bugfixes if necessary.

Help needed! As always, help is needed! Writing C is not that difficult ;-) But user feedback is welcome as well, in case you have an idea. The new code will be hosted on GitHub in the new listaller-next branch (there is currently not that much to find there). Long-term, we will completely migrate away from Launchpad. You can expect more blogposts about the Listaller concepts and progress in the coming months (as soon as I am done with some AppStream-related things, which take priority).

16 August 2014

Matthias Klumpp: AppStream/DEP-11 Debian progress

There hasn't been a progress report on DEP-11 for some time, but that doesn't mean there was no work going on. DEP-11 is Debian's implementation of AppStream, as well as an effort to enhance the metadata available about software in Debian. While initially AppStream was only about applications, DEP-11 was designed with a larger scope, to collect data about libraries, binaries and things like Python modules. Now, since AppStream 0.6, DEP-11 and AppStream have essentially the same scope, with the difference that DEP-11 metadata is described in YAML, while official AppStream data is XML. That was due to a request by our ftpmasters team, which doesn't like XML (which, as opposed to YAML, is also not used anywhere else in Debian). But this doesn't mean that people will have to deal with the YAML file format: the libappstream library will just take DEP-11 data as another data source for its Xapian database, allowing anything using libappstream to access that data just like the XML stuff. Richard's libappstream-glib will also receive support for the DEP-11 format soon, filling its in-memory data cache and enabling the use of GNOME-Software on Debian.

So, what has been done so far? Over the past months, my Google Summer of Code student, Abhishek Bhattacharjee, was working hard to integrate DEP-11 support into dak, the Debian Archive Kit, which maintains the whole Debian archive. The result will be an additional metadata table in our internal Postgres database, storing detailed information about the software available in a Debian package, as well as Components-<arch>.yml.gz files in the Debian repositories. Dak will also produce an application icon cache and a screenshots repository. During the SoC, Abhishek focused mainly on the applications part of things, and less on the other components (like extracting data about Python modules or libraries); these things can easily be implemented later. The remaining steps will be to polish the code and make it merge-ready for Debian's dak (as soon as it has received enough testing, we will likely give it a try on the Tanglu Debian derivative). Following that, Apt will be extended to fetch the DEP-11 data on demand on systems where it is useful (which is currently mostly desktop systems); if you want to save a little bit of space, you will be able to disable downloading this extra metadata in Apt. From there, libappstream will take the data for its Xapian database. This will lead to the removal of the much-hated (from the ftpmasters' and maintainers' side) app-install-data package, which has not been updated for two years and only contains a small fraction of the metadata provided by DEP-11.

What Debian will ultimately gain from this effort is support for software centers like GNOME-Software, and improved support for tools like Apper and Muon in displaying applications. Long-term, with more metadata being available, it would be cool to add support for it to specialized package managers, like Python's pip, npm or gem, to make them fetch information about available distribution software and install that instead of their own copies from 3rd-party repositories, if possible. This should ultimately lead to less code duplication on distributions and will likely result in fewer security issues, since the officially maintained and integrated distribution packages can easily be used, if possible.
This is no attempt to make tools like pip obsolete, but an attempt to have the different tools installing software on your machine communicate better, instead of creating parallel worlds in terms of software management. Another nice side effect of more metadata will be options to search the software repos for tools handling specific mimetypes (in case you can't open a file), smart software centers installing missing firmware, and automatic suggestions for developers about which software they need to install in order to build a specific software package. Also, the data allows us to match software across distributions; on that front, I will have some news soon (not sure how soon though, as I am currently in thesis-writing mode and therefore don't have that much spare time). Since the goal is to have these features available on all distributions supporting AppStream, it will take longer to realize, but we are on a good path. So, if you want some more information about my student's awesome work, you can read his blogpost about it. He will also be at DebConf 14 (Portland). (I can't make it this time, but I surely won't miss the next DebConf.) Sadly, I only see a very small chance of having the basic DEP-11 stuff land in time for Jessie (lots of review work needs to be done, and some more code needs to be written), but we will definitely have it in Jessie+1. A small example of how this data will look can be found here; a larger, actual file is available here. Any questions and feedback are highly appreciated.
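To give a rough idea of what the YAML described above could contain, here is a hedged sketch of a single component entry as it might appear in a Components-<arch>.yml.gz file; the field names follow my reading of the DEP-11 drafts, and the package and ID are invented:
---
Type: desktop-app
ID: org.example.foobar.desktop
Package: foobar
Name:
  C: Foobar
Summary:
  C: An example entry in the DEP-11 metadata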

16 July 2014

Matthias Klumpp: AppStream 0.7 specification and library released

Today I am very happy to announce the release of AppStream 0.7, the second-largest release (judging by commit count) after 0.6. AppStream 0.7 brings many new features to the specification, adds lots of good stuff to libappstream, introduces a new libappstream-qt library for Qt developers and, as always, fixes some bugs. Unfortunately we broke the API/ABI of libappstream, so please adjust your code accordingly. Apart from that, all other changes are backwards-compatible. So, here is an overview of what's new in AppStream 0.7:

Specification changes Distributors may now specify a new <languages/> tag in their distribution XML, providing information about the languages a component supports and the completion percentage for each language. This allows software centers to apply smart filtering on applications, to highlight the ones which are available in the user's native language. A new addon component type was added to represent software which is designed to be used together with a specific other application (think of a Firefox addon or a GNOME-Shell extension). Software-center applications can group the addons together with their main application to provide an easy way for users to install additional functionality for existing applications. The <provides/> tag gained a new dbus item type to expose D-Bus interface names the component provides to the outside world. This means in future it will be possible to search for components providing a specific D-Bus service:
$ appstream-index what-provides dbus org.freedesktop.PackageKit.desktop system
(if you are using the CLI tool). A <developer_name/> tag was added to the generic component definition to define the name of the component developer in a human-readable form. Possible values are, for example, "The KDE Community", "GNOME Developers" or even the developer's full name. This value can be (optionally) translated and will be displayed in software centers. An <update_contact/> tag was added to the specification, to provide a convenient way for distributors to reach upstream to talk about changes made to their metadata or issues with the latest software update. This tag was already used by some projects before, and has now been added to the official specification. Timestamps in <release/> tags must now be UNIX epochs; YYYYMMDD is no longer valid (fortunately, everyone is already using UNIX epochs). Last but not least, the <pkgname/> tag is now allowed multiple times per component. We still recommend creating metapackages according to the contents the upstream metadata describes and placing the file there. However, in some cases defining one component to be in multiple packages is a short way to make metadata available correctly without excessive package tuning (which can become difficult if a <provides/> tag needs to be satisfied). As a small sidenote: the multiarch path in /usr/share/appdata is now deprecated, because we think we can live without it (by shipping -data packages per library and using smarter AppStream metadata generators which take advantage of the ability to define multiple <pkgname/> tags).

Documentation updates In general, the documentation of the specification has been reworked to be easier to understand and to include less duplication of information. We now use extensive crosslinking to show you the information you need in order to write metadata for your upstream project, or to implement a metadata generator for your distribution. Because the specification needs to define the allowed tags completely and contain as much information as possible, it is not very easy to digest for upstream authors who just want some metadata shipped quickly. In order to help them, we now have Quickstart pages in the documentation, which are rich in examples and contain the most important subset of information you need to write a good metadata file. These quickstart guides already exist for desktop applications and addons; more will follow in the future. We also now have an explicit section dealing with the question "How do I translate upstream metadata?". More changes to the docs are planned for the next point releases. You can find the full project documentation at Freedesktop.

AppStream GObject library and tools The libappstream library also received lots of changes. The most important one: we switched from LGPL-3+ to LGPL-2.1+. People who know me know that I love the v3 family of GPL licenses; I like it for its tivoization protection, its explicit compatibility with some other important licenses, and cosmetic details, like entities not losing their right to use the software forever after a license violation. However, an LGPL-3+ library does not mix well with projects licensed under other open source licenses, mainly GPL-2-only projects. I want libappstream to be usable by anyone without forcing the project to change its license. For some reason, using the library from proprietary code is easier than using it from a GPL-2-only open source project. The license change was also a popular request from people wanting to use the library, so I made the switch with 0.7.
If you want to know more about the LGPL-3 issues, I recommend reading this blogpost by Nikos (GnuTLS). On the code side, libappstream received a large pile of bugfixes and some internal restructuring. This makes the cache builder about 5% faster (depending on your system and the amount of metadata which needs to be processed) and prepares for future changes (e.g. I plan to obsolete PackageKit's desktop-file-database in the long term). The library also brings back support for legacy AppData files, which it can now read. However, appstream-validate will not validate these files (and will kindly ask you to migrate to the new format). The appstream-index tool received some changes, making its command-line interface a bit more modern. It is now also possible to place the Xapian cache at arbitrary locations, which is a nice feature for developers. Additionally, the testsuite got improved and should now work on systems which do not have metadata installed. Of course, libappstream also implements all features of the new 0.7 specification. With the 0.7 release, some symbols were removed which had been deprecated for a few releases, most notably as_component_get/set_idname, as_database_find_components_by_str, as_component_get/set_homepage and the pkgname property of AsComponent (which is now a string array and called "pkgnames"). The API level was bumped to 1.

Appstream-Qt

A Qt library to access AppStream data has been added. So if you want to use AppStream metadata in your Qt application, you can easily do that now without touching any GLib/GObject based code! Special thanks to Sune Vuorela for his nice rework of the Qt library! And that's it with the changes for now! Thanks to everyone who helped make 0.7 ready, be it with feedback, contributions to the documentation, translation or coding. You can get the release tarballs at Freedesktop. Have fun!
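As mentioned above, here is a rough, hedged sketch of what an upstream metainfo file using the new 0.7 tags could look like. All IDs, names and values are made up for illustration, and in a real setup the <pkgname/> tags are usually set in the distro-generated data rather than in the upstream file:

<?xml version="1.0" encoding="UTF-8"?>
<component type="desktop">
  <id>fooviewer.desktop</id>
  <metadata_license>CC0-1.0</metadata_license>
  <project_license>GPL-2.0+</project_license>
  <name>FooViewer</name>
  <summary>View Foo files</summary>
  <!-- new in 0.7: human-readable (and translatable) developer name -->
  <developer_name>The Foo Project</developer_name>
  <!-- new in 0.7: contact address so distributors can reach upstream -->
  <update_contact>foo-maintainers_at_example.org</update_contact>
  <releases>
    <!-- release timestamps must be UNIX epochs now, YYYYMMDD is no longer valid -->
    <release version="1.2" timestamp="1407708000"/>
  </releases>
  <!-- multiple pkgname tags are now allowed per component -->
  <pkgname>fooviewer</pkgname>
  <pkgname>fooviewer-data</pkgname>
</component>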

18 June 2014

Matthias Klumpp: Tanglu 2 (Bartholomea annulata) status update #1

Bartholomea annulata (c) Kevin Bryant

It is time for a new Tanglu update, which has been overdue for a long time now! Many things happened in Tanglu development, so here is just a short overview of what was done in the past months.

Infrastructure

Debile

The whole Tanglu distribution is now built with Debile, replacing Jenkins, which was difficult to use for package building purposes (although Jenkins is great for other things). You can see the Tanglu builders in action at buildd.tg.o. The migration to Debile took a lot of time (a lot more than expected), and blocked the Bartholomea development at the beginning, but now it is working smoothly. Many thanks to all people who have been involved with making Debile work for Tanglu, especially Jon Severinsson. And of course many thanks to the Debile developers for helping with the integration, Sylvestre Ledru and of course Paul Tagliamonte.

Archive Server Migration

Those who read the tanglu-announce mailinglist know this already: we moved the main archive server stuff at archive.tg.o to a new location, and to a very powerful machine. We also added some additional security measures to it, to prevent attacks. The previous machine is now being used for the bugtracker at bugs.tg.o and for some other things, including an archive mirror and the new Tanglu User Forums. See more about that below :-)

Transitions

There is huge ongoing work on package transitions. Take a look at our transition tracker and the staging migration log to get a taste of it. Merging with Debian Unstable is also going on right now, and we are working on merging some of the Tanglu changes which are useful for Debian as well (or which just reduce the diff to Tanglu) back to their upstream packages.

Installer

Work on the Tanglu Live-Installer, although badly needed, has not yet been started (it's a task ready for the taking by anyone who would like to do it!). However, some awesome progress has been made in making the Debian-Installer work for Tanglu, which allows us to perform minimal installations of the Tanglu base system and allows easier support of alternative Tanglu flavours. The work on d-i also uncovered a bug which appeared with the latest version of findutils, and which has been reported upstream before Debian could run into it. This awesome progress was possible thanks to the work of Philip Muškovac and Thomas Funk (in really hard debug sessions).
Tanglu Forums

We finally have the long-awaited Tanglu user forums ready! As discussed in the last meeting, a popular demand on IRC and our mailing lists was a forum or Stackexchange-like service for users to communicate, since many people can work better with that than with mailinglists. Therefore, the new English TangluUsers forum is now ready at TangluUsers.org. The forum software is in an alpha version though, so we might experience some bugs which haven't been uncovered in the testing period. We will watch how the software performs and then decide if we stick to it or maybe switch to another one. But so far, we are really happy with the Misago Forums, and our usage of it has already led to the inclusion of some patches against Misago. It also is actively maintained and has an active community.

Misc Things

KDE

We will ship with at least KDE Applications 4.13, maybe some 4.14 things as well (if we are lucky, since Tanglu will likely be in feature-freeze when this stuff is released). The other KDE parts will remain on their latest version from the 4.x series. For Tanglu 3, we might update KDE SC 4.x to KDE Frameworks 5 and use Plasma 5 though.

GNOME

Due to the lack of manpower on the GNOME flavour, GNOME will ship in the same version available in Debian Sid, maybe with some stuff pulled from Experimental, where it makes sense. A GNOME flavour is planned to be available.

Common infrastructure

We currently run with systemd 208, but a switch to 210 is planned. Tanglu 2 also targets the X.org server in version 1.16. For more changes, stay tuned. The kernel release for Bartholomea is also not yet decided.

Artwork

Work on the default Tanglu 2 design has started as well; any artwork submissions are most welcome!

Tanglu joins the OIN

The Tanglu project is now a proud member (licensee) of the Open Invention Network (OIN), which builds a pool of defensive patents to protect the Linux ecosystem from companies who are trying to use patents against Linux. Although the Tanglu community does not fully share the generally positive stance the OIN has about software patents, the OIN effort is very useful and we agree with its goal. Therefore, Tanglu joined the OIN as licensee.
And that's the stuff for now! If you have further questions, just join us on #tanglu or #tanglu-devel on Freenode, or write to our newly created forum! You can, as always, also subscribe to our mailinglists to get in touch.

26 May 2014

Matthias Klumpp: Want AppStream metadata for your KDE project? Act now!

And again, another KDE-AppStream post ;-) If you want to know more about AppStream metadata and why adding it to your project is a good idea, you might be interested in this blogpost (and several previous ones I wrote). Originally, my plan was to directly push metadata to most KDE projects. The problem is that there is no way to reach all maintainers and have them opt out of getting metadata pushed to their repositories. There is also no technical policy for a KDE project, since KDE is really only about the community right now, and there are no technical criteria a project under the KDE umbrella has to fulfill (at least to my knowledge; in theory, even GTK+ projects are perfectly fine within KDE). Since I feel very uncomfortable touching other people's repositories without sending them a note first, I think the best way forward is an opt-in approach. So, if you want your KDE project to ship metadata, follow these simple steps:

1. Check if there already is metadata for your project

That's right, we already have some metadata available. Check out the kde-appstream-metadata-templates repository at Github. You can take the XML file from there, if you want. Just make sure that there are no invalid tags in the description field (no <a/> nodes allowed, for example; the content is not HTML!), check if you have an SPDX-compliant <project_license/> tag, check if the provided public interfaces in the <provides/> tag match your project and also test if the URLs work. Then you can copy the modified AppStream metadata to your project.

2. Write new metadata

How to write new metadata is described in detail at this TechBase Wiki page. Just follow the instructions (a rough sketch of what such a file looks like follows at the end of this post). In case you need help, or want me to push the metadata to your project if you don't have the time, you can also write me an email: matthias [ AT ] tenstral . net or alternatively file a bug against the Github project linked above. Don't forget to have CMake install your shiny new metadata into /usr/share/appdata/. All metadata you add to your project will automatically get translated by the KDE l10n scripts, no further action is required. So far, projects like Ark, K3b and Calligra are shipping metadata, and the number of AppStream-aware projects in KDE is growing constantly, which greatly improves their visibility in software centers and will help distributions a lot in organizing installed software. If you have further questions, please ask! :-)
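As mentioned in step 2, here is a rough sketch of a minimal AppData file for a hypothetical KDE application. All IDs, names and URLs are placeholders; the TechBase page linked above remains the authoritative reference:

<?xml version="1.0" encoding="UTF-8"?>
<component type="desktop">
  <id>fooplayer.desktop</id>
  <metadata_license>CC0-1.0</metadata_license>
  <project_license>GPL-2.0+</project_license>
  <name>FooPlayer</name>
  <summary>Play Foo media files</summary>
  <description>
    <p>FooPlayer is a lightweight player for the Foo media format.</p>
  </description>
  <url type="homepage">http://example.org/fooplayer</url>
  <screenshots>
    <screenshot type="default">
      <image>http://example.org/images/fooplayer-main.png</image>
    </screenshot>
  </screenshots>
  <provides>
    <binary>fooplayer</binary>
  </provides>
</component>

The file would then be installed into /usr/share/appdata/ (e.g. as fooplayer.appdata.xml) by the project's build system.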

3 May 2014

Matthias Klumpp: Translated AppStream metadata for KDE!

It took some time, but now it's finally done: KDE has translations for AppStream upstream metadata! AppStream is a Freedesktop project to extend the metadata about the software which is available in distributions, especially regarding applications. Distributions compile a metadata file from data collected from packages, .desktop files and possibly other information sources, and create an AppStream XML file from it, which is then read, directly or via a Xapian cache, by software-center-like applications such as GNOME-Software or KDE's Apper. Since the metadata available from current sources is not standardized and rather poor, upstream projects can ship small XML files, "AppStream upstream metadata" or "AppData" in short. These files contain additional information about a project, such as a long description and links to screenshots. They also provide hints about the public interfaces a piece of software provides, for example binaries and libraries, making it possible for distributors to give users exactly the right package name in case they are missing a software component. So, in order to represent graphical KDE applications as they deserve in the new software centers making use of AppStream, we need to ship AppData files, with long descriptions, screenshots and a few URLs. But how can you create these metadata files? In case you want your graphical KDE app to ship an AppData file, there is now a help page on the Techbase Wiki which provides all information needed to get started! For non-visual stuff, or software which just wants to publish its provided interfaces with AppStream metadata, there is a dedicated page for that as well. Shipping metadata for non-GUI apps will help programmers to satisfy dependencies in order to compile new software, enhance bash-completion for missing binaries and provide some other neat stuff (take a look at this blogpost to get a taste of it). And if you want to read a FAQ about the metadata stuff and get the bigger picture, just go to the Techbase Wiki page about AppStream metadata as well. The pages are not 100% final, so if you have questions, please write me a mail and I'll update the pages, or simply correct/refine them yourself (it's a wiki after all). And now to the best thing: as soon as you ship an AppStream upstream metadata file (*.appdata.xml* for apps or *.metainfo.xml* for other stuff), the KDE l10n-script (Scripty!) will automatically start translating it, just like we already do with .desktop files. No further actions are necessary. I already have a large amount of metadata files here, partially auto-generated, which show that we have about 160+ applications in KDE which could get an AppData file, not counting any frameworks or other non-GUI stuff yet. Since that is a bit much to submit via Reviewboard (which I originally planned to do), I hope I can commit the changes directly to the respective repositories, where the maintainers can take a look at them and adjust them to their liking. If that idea does not receive approval, I will just publish a set of data somewhere for the KDE app maintainers to take as a reference (the auto-generated stuff needs some fixup to be commit-ready, which I'd do in case I can just commit changes). Either way, it is now safe to write and ship AppData files in KDE projects! In order to get your stuff translated, it is necessary that you follow the AppStream 0.6 metadata specification, and not one of the older revisions.
You can easily detect 0.6 metadata by the <component> root node, instead of <application>, or by it having a metadata_license tag (a small sketch follows at the end of this post). We don't support the older versions simply because it's not necessary, as there were only two KDE projects shipping AppData before, and they are now using 0.6 data as well. Since 0.6, the metadata XML format is guaranteed to be stable, and the only reason which could make me change it in an incompatible way is to prevent something as bad as the end of the world from happening (== won't happen ;-) ). You can find the full specification (upstream and distro data) here. All parsers are able to handle 0.6 data now, and the existing tools are almost all migrated already (it might take a few months to hit the distributions though). So, happy metadata-writing! :-) Thanks to all people who helped with making this happen, and especially Burkhard Lück and Albert Astals Cid for their patch review and help with integrating the necessary changes into the KDE l10n-script.
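As mentioned above, here is a small made-up sketch of how a 0.6-style file opens, with the <component> root node and the metadata_license tag (older AppData files used an <application> root element instead):

<?xml version="1.0" encoding="UTF-8"?>
<component type="desktop">
  <id>kexample.desktop</id>
  <metadata_license>CC0-1.0</metadata_license>
  <name>KExample</name>
  <summary>An example KDE application</summary>
</component>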

25 April 2014

Matthias Klumpp: AppStream 0.6.1 released - a quick look at what you can do with the cli tool

Today I released AppStream and libappstream 0.6.1, which feature mostly bugfixes, so nothing incredibly exciting to see there (but this also means no API/ABI breaks). The release clarifies some paragraphs in the spec which people found confusing, and fixes a few issues (like one example in the docs not being valid AppStream metadata). As the only spec extension, we introduce a priority property in distro metadata to allow metadata from one repository to override data shipped by another one. This is already used (although with a similar syntax) in Fedora to have placeholder data for non-free stuff, which gets overridden by the real metadata if a new application is added. In general, the priority property was added to make the answer to the question "which data is preferred?" much less magic. The libappstream library got some new API to query component data in different ways, and I also brought back support for Vala (so if you missed the Vapi file: it's back now, although you have to manually enable this feature). The CLI tool also got some extensions to query AppStream data. Here is a brief introduction. First of all, we need to make sure the database is up-to-date, which should be the case already (it is rebuilt automatically):
$ sudo appstream-index --refresh
The database will only be rebuilt when necessary; if you want to force a rebuild anyway, use the force parameter. Now imagine we want to search for an app containing the word "media" (in the description, keywords, summary, …):
$ appstream-index -s media
which will return:
Identifier: gnome-media-player.desktop [desktop-app]
Name: GNOME Media Player
Summary: A simple media player for GNOME
Package: gnome-media-player
----
Identifier: mediaplayer-app.desktop [desktop-app]
Name: Media Player
Summary: Media Player
Package: mediaplayer-app
----
Identifier: kde4__plasma-mediacenter.desktop [desktop-app]
Name: Plasma Media Center
Summary: A mediacenter user interface written with the Plasma framework
Package: plasma-mediacenter
----
etc.
If we already know the name of a .desktop file or the ID of a component, we can have the tool print out information about the application, including which package it was installed from:
$ appstream-index --get lyx.desktop
If we want to see more details, including e.g. a screenshot URL and a longer description, we can pass details to the tool:
Identifier: lyx.desktop [desktop-app]
Name: LyX
Summary: An advanced document processor with the power of LaTeX.
Package: lyx-common
Homepage: http://www.lyx.org/
Icon: lyx.png
Description: LyX is a document processor that encourages an approach to writing
 based on the structure of your documents (WYSIWYM) and not simply
 their appearance (WYSIWYG).
 
 LyX combines the power and flexibility of TeX/LaTeX[...]
Sample Screenshot URL: http://alt.fedoraproject.org/pub/alt/screenshots/f21/source/lyx-ea535ddf18b5c7328c5e88d2cd2cbd8c.png
License: GPLv2+
(I truncated the results slightly ;-) ) Okay, so far so good. But now it gets really exciting (and this is a feature added with 0.6.1): we can now query a component by the items it provides. For example, I want to know which software provides the library libfoo.so.2:
$ appstream-index --what-provides -t lib -v libfoo.so.2
This also works with binaries, or Python modules:
$ appstream-index --what-provides -t bin -v apper
This stuff works distribution-agnostic, as long as the software ships upstream metadata with a valid <provides/> field, or the distributor adds it while generating the AppStream distro metadata.
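For illustration, a <provides/> block matching the queries above could look roughly like this (a hedged sketch with made-up grouping; in practice each item is declared by the component that actually ships it):

<provides>
  <library>libfoo.so.2</library>
  <binary>apper</binary>
</provides>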
This means that software can, as soon as we have sufficient metadata of this kind, declare its dependencies upstream in the form of a simple text file, referencing the needed components to build and run it on any Linux distribution. Users can simply install missing stuff by passing that file to their package manager, which can look up the components->packaging mapping and versions and do the right thing in installing the dependencies. So basically, this allows the things pip -r does for Python, but for any application (not only Python stuff), and based on the distributor's package database. With the provides-items, we can also scan software to detect its dependencies automatically (and have them in a distro-agnostic form directly). We can also easily search for missing mimetype-handlers, missing kernel-modules, missing firmware etc. to install them on demand, making the system much smarter in handling its dependencies. And users don't need to do search orgies to find the right component for a given task. Also on my todo list for the future, based on this feature: a small tool telling upstream authors which distribution has their application in which version, using just one command (and AppStream data from multiple distros). Also planned: a cross-distro information page showing which distros ship which library versions, Python modules and application versions (and also the support status of the distro), so developers know which library versions (or GCC versions etc.) they should at least support to make their application easily available on most distributions. As always, you can get the releases on Freedesktop, as well as the AppStream specification.

28 March 2014

Matthias Klumpp: Appstream: The next step

With the release of GNOME-Software 3.12, it's also time for another update about what is going on behind the scenes of Appstream, the project providing all the data needed to make software-centers work. So here is the long overdue update :) If you have attended my FOSDEM talk or seen the slides, you know about the concept of component metadata to describe (almost) all software components which make up a Linux system, as well as the public interfaces they provide for users and other software to access. This metadata specification was originally designed as part of the Listaller project for use with the 3rd-party software installer.
But we already have the Appstream specification, which describes applications available on a distribution. And technically, applications are nothing but specialized components.
When I was asked about extending the Appstream spec with fonts and codecs, I decided to give up the Listaller components concept and merge it with the Appstream project, effectively creating a single source of metadata for all software in distributions. Appstream and components were separate because Appstream was designed just for application metadata shown in software-centers, while component metadata was more technical and not user-visible. This separation, however, did not reflect reality. Software centers want to show things components were designed for (codecs, fonts, …), and application meta-information can also contain technical items not shown in a software-center, but useful for other functionality (e.g. for finding mimetype handlers). So, if you like clear naming of things (like I do), you can now think of the upcoming Appstream release 0.6 as Appstream not being a stream of application metadata, but rather metadata provided for use in applications. ;-) Extending Appstream with components has a number of benefits:
First of all, users will soon get systems which are able to automatically install missing firmware, codecs, fonts and even libraries or Python modules. This happens in a distro-agnostic way, so any application can request installation of missing components through Appstream and PackageKit without having to care about distribution details. For distributors, you will get more information about the software upstream provides: which public interfaces does it make available? What is the upstream's homepage? Summary? Short release descriptions? Exposed library symbols? etc. This data was available before, of course, but we made it machine-readable and standardized now (a tiny illustrative component sketch follows at the end of this post). This will also allow us to match software across distributions quickly, allowing efficient exchange of patches between distributions, and answering questions upstream has, like "in which distributions is my software available, and in which version?". Tracking upstream software is also easier now. Last but not least, the metadata format gives upstreams a small say in how their software gets packaged (= how it gets split / if there is a metapackage depending on the split parts). For upstream authors, it is much easier to maintain just a few Appstream XML metafiles, instead of adding a Listaller component file as well (which was not XML). Most data for creating the metainfo files is already available, and some can even be filled in automatically by the build system. With all the advantages this new system has comes a small disadvantage: these changes broke the backwards-compatibility of the Appstream spec, so to use Appstream data in version 0.6 or higher, you need to adjust your parsers. But of course you are using a library to access the data anyway, which already does that for you ;-) GNOME-Software 3.12 has almost complete support for the new (but still not released) Appstream 0.6 spec through its libappstream-glib library. The libappstream library in Appstream's Git branch will get 0.6 support as well very soon. The only thing you need to do is adjust your code for the new library version, which broke its API as well due to a rewrite in C (I hope that we can provide language bindings again very soon).
Soon, there will also be a Qt library for accessing Appstream, since some people expressed a demand for it (and working with GObject in Qt C++ code always feels a little bit odd).
The 0.6 specification is still being fine-tuned to iron out some issues which might come up, and mainly to improve the documentation and adjust all tools. It will be released in 2-3 weeks, and bring not only components support, but also lots of other small improvements and additions for better description of applications/components. In some cases, tag names have changed to have less ambiguous names. Thanks to Richard Hughes for helping with the spec; our discussions about the Appstream spec are highly productive and in the end lead to a better specification. Also, the new 0.6 specification includes many extensions used in GNOME-Software by default now, which makes it easier for other distributions to provide them for GS. The current draft for the Appstream 0.6 specification can be found here. Some details might still change before the final release. As soon as the spec is final, there will be a ML announcement for distributors as well (watch the XDG distributions list), so there is enough time for distributors to future-proof their Appstream distro-data generators (I will also describe the changes in more detail then). If you are confused now about the whole components stuff: fear not! ;-) I will do some more detailed posts in a few weeks explaining to upstream projects how to provide metadata properly (after the 0.6 release). I will also finish the wiki page for Appstream AppData in KDE projects, where we can then use 0.6-style data in KDE right away.
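As mentioned above, here is a tiny, purely illustrative sketch of what a non-application component (in this case a font) could look like in the spirit of the 0.6 draft; since the spec was not final at the time, the exact tags and values are assumptions, not taken from the specification text:

<?xml version="1.0" encoding="UTF-8"?>
<component type="font">
  <id>examplefont</id>
  <metadata_license>CC0-1.0</metadata_license>
  <name>Example Serif</name>
  <summary>A free serif font family</summary>
  <provides>
    <!-- the provided font, so missing-font lookups can resolve to this component -->
    <font>Example Serif</font>
  </provides>
</component>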

12 February 2014

Matthias Klumpp: AppStream/Listaller FOSDEM slides

FOSDEM is over, and it has been a great time! I talked to many people (thanks to full devrooms ;-) the Graph Databases and Config Management rooms were crazy, JavaScript as well), and there might be some cool new things coming soon which we discussed there :-)
I also got some good and positive feedback on the projects I work on, and met many people from Debian, Kubuntu, KDE and GNOME (I hadn't seen some of them for almost 3 years). One of the best things about being at FOSDEM is that you don't only see people of your own kind; for example, for me as a Debian developer it was great to see Fedora people and discuss things with them, something which rarely happens at Debian conferences. Also, having GNOME and KDE closely together again (literally, their stands were next to each other) is something I had missed since the last Desktop Summit in 2011. My talks were also good, except for the beamer-slides technical error at the beginning, which took quite some time to fix (I blame KScreen ;-) ). In case you're interested in the slides, here they are: slides for FOSDEM 14 AppStream/Listaller talks. The slides can likely be understood without the talk; they are way too detailed (usually I only show images on slides, but that doesn't help people who can't see the talk ^^). I hope I can make it to FOSDEM 15 as well; I've been there only once, but it is already my favourite FOSS conference (and I love Belgian waffles) ;-)

26 January 2014

Matthias Klumpp: Next stop: FOSDEM 14

After years of missing this conference, I will finally make it to Brussels this time! I will even have some talks: one about Listaller (in the lightning-talks section) and one about AppStream and Listaller in the Distributions track. The lightning talk will explain why distributing (some) applications independently from the distribution makes sense, and how Listaller does this. I will also very briefly talk about the concepts behind Listaller and which tools it offers for application developers to create & package cross-distro apps. The AppStream & Listaller talk will be much more detailed. It will cover the rationale for AppStream, what AppData is good for and how AppData files relate to AppStream. I will also reflect on the AppData adoption in GNOME/Fedora and why GNOME-Software is the right way to go forward in developing software-centers. It will of course also include our future plans for AppStream. On the Listaller side, I will talk about how Listaller is connected to AppStream and PackageKit, and why distributions should ship a way to install cross-distro packaged apps at all. I will explain module-definitions and why they are useful. An overview of the internals of Listaller and its system integration is also included, as well as how it compares to competing installation solutions. If you are at FOSDEM and have questions about AppStream/PackageKit/Listaller/Tanglu/Debian/etc., please ask them! ;-) See you there!
